Abstract:
The paper introduces the idea of generalising a cumulative frequency curve to show arbitrary cumulative counts. For example, in demographic studies generalised cumulative curves can represent the distribution of population or area. Generalised cumulative curves can be a valuable instrument for exploratory data analysis. The use of cumulative curves in an investigation of population statistics in Northwest England allowed us to discover interesting facts about the relationships between the distribution of national minorities and the degree of deprivation. We found that, while high concentrations of national minorities occur, in general, in underprivileged districts, there are some differences related to the origin of the minorities. The paper sets out the applicability conditions for generalised cumulative curves and compares them with other graphical tools for exploratory data analysis.
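The core idea, accumulating an arbitrary quantity rather than a simple frequency count along a ranking, can be illustrated with a minimal sketch. This is not the authors' code, and all district figures below are hypothetical: districts are ranked by a deprivation index and the cumulative population share is accumulated along that ranking.

```python
# Hypothetical districts: (deprivation_index, population).
districts = [
    (0.9, 12000), (0.7, 8000), (0.5, 15000), (0.3, 5000), (0.1, 20000),
]

# Rank districts from most to least deprived.
ranked = sorted(districts, key=lambda d: d[0], reverse=True)

# Instead of counting districts (an ordinary cumulative frequency
# curve), accumulate an arbitrary quantity -- here, population share.
total = sum(pop for _, pop in ranked)
cumulative = []
running = 0
for _, pop in ranked:
    running += pop
    cumulative.append(running / total)

print(cumulative)  # monotone curve ending at 1.0
```

Plotting `cumulative` against rank gives the generalised cumulative curve; a curve that rises steeply at the deprived end would indicate a quantity concentrated in underprivileged districts.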
Abstract:
What can be done with data after generating it in software such as Microsoft Excel? To analyse it further, more sophisticated software such as Tableau is often used. Data visualisation using Python is another such method, where analysis happens at a deeper level.
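As a minimal, library-free sketch of the idea (the data and column names are illustrative, not from the paper): take rows exported from a spreadsheet tool and render a quick text bar chart in Python.

```python
import csv
import io

# Stand-in for a CSV file exported from a spreadsheet tool.
raw = "region,sales\nNorth,30\nSouth,45\nEast,15\n"

rows = list(csv.DictReader(io.StringIO(raw)))
scale = max(int(r["sales"]) for r in rows)

# Scale each bar to at most 20 characters.
chart = [f'{r["region"]:>5} | ' + "#" * (20 * int(r["sales"]) // scale)
         for r in rows]
print("\n".join(chart))
```

In practice the same parsed `rows` would be handed to a plotting library such as matplotlib or to pandas for the richer, interactive analysis the abstract alludes to.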
Abstract:
The logs generated during various processes, such as networking and web surfing, can be voluminous. These logs need to be processed and analysed to enhance the system's quality and performance and facilitate proactive fault detection and handling. Log analysis software such as Picviz, which was built to show huge volumes of data for the sake of security, is one example. The parallel coordinate system in Picviz allows data to be presented in multiple dimensions. Its main aim is to simplify data analysis and identify correlations among variables. However, representing a large amount of data all at once in the software can lead to congested and clustered lines, making it challenging to distinguish and extract information. To address this issue, we propose two improved methods: 'grouping based on comparison of data' and 'grouping of consecutive data'. In order to simplify the reading experience, these plug-ins collect related lines into sets and present them all at once, making the image more readable and comprehensible. This paper outlines four cases to highlight these plug-ins' significance and potential application scenarios for each case.
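The first of the two methods, 'grouping based on comparison of data', can be sketched as follows. This is a hedged illustration, not Picviz's actual plug-in code: multidimensional records whose values are close on every axis are merged into one representative line, reducing clutter in a parallel-coordinate plot.

```python
def close(a, b, tol=1.0):
    """True when two records are within `tol` on every axis."""
    return all(abs(x - y) <= tol for x, y in zip(a, b))

def group_lines(records, tol=1.0):
    """Greedily group similar records; return one mean line per group."""
    groups = []
    for rec in records:
        for g in groups:
            if close(rec, g[0], tol):
                g.append(rec)
                break
        else:
            groups.append([rec])
    # Represent each group by its axis-wise mean line.
    return [tuple(sum(vals) / len(vals) for vals in zip(*g))
            for g in groups]

lines = [(1.0, 5.0, 3.0), (1.2, 5.1, 3.2), (9.0, 2.0, 7.0)]
print(group_lines(lines))  # two representative lines instead of three
```

Drawing only the representative lines (optionally with a band showing each group's spread) keeps the parallel-coordinate view readable while preserving the correlations between axes.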
Abstract:
This study investigates how the frictions that emerge while synthesising disparate datasets can be transparently conveyed in a single data visualisation. We encountered this need while being embedded in an academic consortium of four epistemologically distant scientific teams, who wanted to develop new interdisciplinary hypotheses from their merged datasets. By inviting these scientists to collaboratively develop visualisation prototypes of their data within their own discipline and then towards the other disciplines, we uncovered four data frictions that relate to discipline-specific interpretations of data, methodological approaches, ways of handling data uncertainties, as well as the large differences in dataset scale and granularity. We then recognised how the resulting visualisation prototypes contained several promising techniques that addressed these frictions transparently, such as retaining their overall visualisation context and using visual translators to mediate between differing scales. Driven by critical data discourse that calls for frictions to be foregrounded rather than occluded, we generalised these techniques into a series of actionable design considerations. While originating from a single case of an interdisciplinary collaboration, we believe that our findings form a crucial step towards enabling a more transparent and accountable interdisciplinary data visualisation practice.
Abstract:
Scientific visualisation of computational or observational data sets in material sciences is essential for studying data sets of ever-increasing complexity. Rather than just implementing self-contained solutions to address particular problems, a systematic approach to modelling data sets opens the gateway to sharing them with other applications and general-purpose visualisation frameworks. The fibre bundle data model is a mathematical description encompassing many diverse data types, ranging from molecular dynamics via continuum mechanics describing solids and fluids to finite elements. It maps well onto the Hierarchical Data Format, a widely used data storage container. Various choices nevertheless remain in such a data mapping, and these are reviewed in this article. The fibre bundle data model provides a classification scheme for data sets on an abstract level, disregarding implementation details, and therefore eases selecting visualisation methods appropriate for the underlying data. Data are hereby studied via properties of their so-called base space and fibre space. The base space describes properties of the numerical discretisation scheme; the fibre space describes physical quantities. Visual data analysis of both spaces is important, but the two can be treated largely independently, depending on the need to study either computational or physical aspects of the data. Methods to study the topological structure of the base space complement methods to study scalar, vector and tensor fields, and together they provide a highly systematic approach to scientific visualisation in material sciences.
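The base-space/fibre-space split can be sketched schematically. This is our own assumed layout, not a fixed standard from the article: plain Python dicts stand in for the hierarchical groups and datasets of an HDF-style container, with the discretisation (mesh) in the base space and the physical fields over it in the fibre space.

```python
dataset = {
    "base_space": {                     # numerical discretisation scheme
        "vertices": [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
        "cells": [(0, 1, 2)],           # one triangular finite element
    },
    "fibre_space": {                    # physical quantities per vertex
        "temperature": [300.0, 310.0, 305.0],                # scalar field
        "velocity": [(0.1, 0.0), (0.0, 0.2), (0.05, 0.1)],   # vector field
    },
}

# The two spaces can be inspected independently: a mesh viewer needs
# only the base space, while the fibre type (scalar vs. vector) alone
# suggests an appropriate visualisation method (colour map vs. glyphs).
n_vertices = len(dataset["base_space"]["vertices"])
assert all(len(field) == n_vertices
           for field in dataset["fibre_space"].values())
print(sorted(dataset["fibre_space"]))
```

With an HDF library such as h5py, the same nesting would map to groups and datasets, which is what makes the model convenient to share across applications.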
Abstract:
The organisation of companies and their HR departments is being profoundly affected by recent advancements in computational power and Artificial Intelligence, a trend likely to accelerate dramatically in the next few years. This work presents foo.castr, a tool we are developing to visualise, communicate and facilitate understanding of the impact of these advancements on the future of the workforce. It builds upon the idea that particular tasks within job descriptions will progressively be taken over by computers, forcing human jobs to be reshaped. In its current version, foo.castr presents three different scenarios to help HR departments plan for potential changes and disruptions brought by the adoption of Artificial Intelligence.
Abstract:
LinkedVis implements a JavaScript and SVG data visualisation toolkit that can be used to generate a wide range of interactive information visualisations from RDF graphs, using a grammar-of-graphics-style syntax extended with operations for structural transformation of the RDF data graph. Additionally, LinkedVis visualisations make it possible to embed metadata about the visualisation and the way the different graphic components of the visualisation relate to the original RDF data. Inserting this metadata turns the visualisation into a self-describing piece of information that an automatic agent can process to perform different tasks, such as extracting the data associated with a visual component, following the associated linked URIs, or translating the visualisation to an entirely different underlying graphics system than SVG.
Abstract:
Access to software tools for interactive data reduction, visualisation and analysis during a neutron scattering experiment enables instrument users to make informed decisions regarding the direction and success of their experiment. ANSTO aims to enhance the experiment experience of its facility's users by integrating these data reduction tools with the instrument control interface for immediate feedback. GumTree is a software framework and application designed to support an Integrated Scientific Experimental Environment, for concurrent access to instrument control, data acquisition, visualisation and analysis software. The Data Reduction and Analysis (DRA) module is a component of the GumTree framework that allows users to perform data reduction, correction and basic analysis within GumTree while an experiment is running. It is highly integrated with GumTree, able to pull experiment data and metadata directly from the instrument control and data acquisition components. The DRA itself uses components common to all instruments at the facility, providing a consistent interface. It features familiar ISAW-based 1D and 2D plotting, an OpenGL-based 3D plotter and peak fitting performed by fityk. This paper covers the benefits of integration, the flexibility of the DRA module, the ease of use of the interface and audit-trail generation. (c) 2006 Published by Elsevier B.V.
Abstract:
The Continuous Plankton Recorder (CPR) survey, operated by the Sir Alister Hardy Foundation for Ocean Science (SAHFOS), is the largest plankton monitoring programme in the world and has spanned >70 yr. The dataset contains information from ~200 000 samples, with over 2.3 million records of individual taxa. Here we outline the evolution of the CPR database through changes in technology, and how this has increased data access. Recent high-impact publications and the expanded role of CPR data in marine management demonstrate the usefulness of the dataset. We argue that solely supplying data to the research community is not sufficient in the current research climate; to promote wider use, additional tools need to be developed to provide visual representation and summary statistics. We outline 2 software visualisation tools, SAHFOS WinCPR and the digital CPR Atlas, which provide access to CPR data for both researchers and non-plankton specialists. We also describe future directions of the database, data policy and the development of visualisation tools. We believe that the approach at SAHFOS to increase data accessibility and provide new visualisation tools has enhanced awareness of the data and led to the financial security of the organisation; it also provides a good model of how long-term monitoring programmes can evolve to help secure their future.
Abstract:
Biosimulation models are used to understand the multiple causative factors behind impairment in human organs. The Finite Element Method (FEM) provides a mathematical framework for simulating dynamic biological systems, with applications ranging from human-ear and cardiovascular to neurovascular research. Finite Element (FE) biosimulation experiments produce huge amounts of numerical data, and visualising and analysing this data is a strenuous task. In this paper, we present a Linked Data visualiser, called the SIFEM Visualiser, to help domain experts (experts in the field of ear mechanics) and clinical practitioners (otorhinolaryngologists) visualise, analyse and compare biosimulation results drawn from heterogeneous, complex, high-volume numerical data. The SIFEM Visualiser builds on conceptualising different aspects of biosimulations. In addition to the visualiser, we also propose how biosimulation numerical data can be conceptualised such that it sustains the visualisation of large numerical data. The SIFEM Visualiser aims to help domain scientists and clinical practitioners explore and analyse Finite Element (FE) numerical data and simulation results obtained from different aspects of the inner-ear (cochlear) model, such as the biological, geometrical, mathematical and physical models. We validate the SIFEM Visualiser through both qualitative and quantitative evaluation.